In recent years, deep learning-based parallel imaging (PI) has made great progress in accelerating magnetic resonance imaging (MRI). However, the performance and robustness of existing methods can still be unsatisfactory. In this work, we propose to explore k-space domain learning through a weighted k-space generative model (WKGM) for flexible PI reconstruction. Specifically, WKGM is a generalized k-space domain model in which a k-space weighting technique and a high-dimensional space augmentation design are effectively incorporated into score-based generative model training, yielding good and robust reconstruction. Moreover, WKGM is flexible and can therefore be synergistically combined with various traditional k-space PI models, producing learning-based priors for high-fidelity reconstruction. Experiments on datasets with different sampling patterns and acceleration factors demonstrate that WKGM attains state-of-the-art reconstruction results with a well-learned k-space generative prior.
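The weighting idea can be illustrated with a short sketch. The actual weighting function of WKGM is defined in the paper; the radial weight below is a hypothetical stand-in, and the weighted k-space would serve as the training data for the score-based generative model.

```python
# Minimal sketch of k-space weighting for generative prior training.
# NOT the authors' implementation: the radial weight (|k| + eps)^p is an
# assumed placeholder for the weighting defined in the WKGM paper.
import numpy as np

def radial_weight(shape, p=1.0, eps=1e-3):
    """Hypothetical weight that grows with spatial frequency."""
    ky, kx = np.meshgrid(np.fft.fftfreq(shape[0]),
                         np.fft.fftfreq(shape[1]), indexing="ij")
    return (np.sqrt(kx ** 2 + ky ** 2) + eps) ** p

def to_weighted_kspace(image):
    """FFT an image and apply the weight; the weighted k-space (e.g. stacked
    real/imag channels) would be fed to the score network during training."""
    k = np.fft.fft2(image)
    w = radial_weight(image.shape)
    return k * w, w

def from_weighted_kspace(kw, w):
    """Undo the weighting and return to image space, e.g. inside an iterative
    reconstruction that alternates with a data-consistency step."""
    return np.fft.ifft2(kw / w)
```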
This paper provides a unified view for explaining different adversarial attack and defense methods, i.e., the view of multi-order interactions between the input variables of a DNN. Based on the multi-order interaction, we find that adversarial attacks mainly affect high-order interactions to fool the DNN. Furthermore, we find that the robustness of adversarially trained DNNs comes from category-specific low-order interactions. Our findings offer a potential way to unify adversarial perturbations and robustness, and can explain existing defense methods in a principled manner. In addition, our findings also revise previous, inaccurate understanding of the bias of features learned through adversarial training.
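For context, the multi-order interaction referred to here is usually defined as follows, stated under the assumption that the paper follows the standard game-theoretic formulation, with $N$ the set of input variables and $f(S)$ the network output when only the variables in $S$ are kept:

```latex
% m-order interaction between input variables i and j (standard formulation,
% assumed here):
I^{(m)}(i,j) = \mathbb{E}_{S \subseteq N \setminus \{i,j\},\, |S| = m}
               \big[ \Delta f(i,j,S) \big],
\qquad
\Delta f(i,j,S) = f(S \cup \{i,j\}) - f(S \cup \{i\}) - f(S \cup \{j\}) + f(S).
```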
Defect detection plays a vital role in the manufacturing process of integrated circuits (ICs). Die attachment and wire bonding are two steps of the manufacturing process that determine the quality and reliability of power and signal transmission in an IC. This paper presents a survey, or literature review, of methods for detecting these defects based on the different sensing modalities used, including optical, radiographic, acoustic, and infrared thermography. The survey discusses the detection methods employed, covering both conventional machine learning methods and deep learning methods for detecting die attachment and wire bonding defects, as well as challenges and future research directions.
Die analysis is an essential numismatic method and an important tool for ancient economic history. However, manual die studies are too labor-intensive to comprehensively study large coinages such as those of the Roman Empire. We address this problem by proposing a model for unsupervised computational die analysis, which can reduce the time investment required for large-scale die studies by several orders of magnitude, in many cases from years to weeks. From a computer-vision viewpoint, die studies present a challenging unsupervised clustering problem, since they involve an unknown and large number of highly similar semantic classes of imbalanced sizes. We address these issues by determining dissimilarities between coin faces from specifically designed Gaussian-process-based keypoint features within a Bayesian distance clustering framework. The efficacy of our method is demonstrated through the analysis of 1135 Roman silver coins struck in 64-66 C.E.
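A heavily simplified stand-in can illustrate the overall pipeline: given a precomputed matrix of pairwise dissimilarities between coin faces, group the coins into candidate dies. The hierarchical clustering below replaces the paper's Bayesian distance clustering, and the dissimilarities would in practice come from the Gaussian-process-based keypoint features described in the paper.

```python
# Simplified illustration only, not the paper's method: hierarchical clustering
# on a precomputed dissimilarity matrix stands in for Bayesian distance
# clustering over GP-based keypoint dissimilarities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_dies(dissimilarity, threshold=0.5):
    """dissimilarity: (n_coins, n_coins) symmetric matrix with zero diagonal.
    Returns an integer die label per coin."""
    condensed = squareform(dissimilarity, checks=False)
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=threshold, criterion="distance")

# Usage with random data, just to show the call pattern:
rng = np.random.default_rng(0)
d = rng.random((10, 10)); d = (d + d.T) / 2; np.fill_diagonal(d, 0.0)
print(cluster_dies(d))
```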
The standard semantics of multi-agent epistemic logic S5 is based on Kripke models whose accessibility relations are reflexive, symmetric and transitive. This one-dimensional structure contains implicit higher-dimensional information beyond pairwise interactions, which we formalized as pure simplicial models in a previous work (Information and Computation, 2021). Here we extend the theory to encompass simplicial models that are not necessarily pure. The corresponding class of Kripke models is that in which the accessibility relation is symmetric and transitive, but may not be reflexive. Such models correspond to the epistemic logic KB4. Impure simplicial models arise in situations where two possible worlds may not have the same set of agents. We illustrate this with distributed computing examples of synchronous systems in which processes may crash.
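As a concrete illustration of the pure/impure distinction, a simplicial model can be encoded as a set of facets, each facet being a set of (agent, local value) pairs; the model is pure exactly when every facet involves the same number of agents. The encoding below is chosen for illustration and is not the paper's formalization.

```python
# Illustration of pure vs. impure simplicial models.
# A facet is encoded as a frozenset of (agent, local_value) pairs; this
# encoding is an assumption made here, not the paper's notation.

def is_pure(facets):
    """A simplicial model is pure iff all facets have the same dimension,
    i.e. involve the same number of agents."""
    sizes = {len(f) for f in facets}
    return len(sizes) <= 1

# Pure: every world involves agents a, b, c.
pure_model = {
    frozenset({("a", 0), ("b", 0), ("c", 0)}),
    frozenset({("a", 0), ("b", 1), ("c", 0)}),
}

# Impure: in one world agent c has crashed (synchronous system with crashes),
# so that facet mentions only agents a and b.
impure_model = {
    frozenset({("a", 0), ("b", 0), ("c", 0)}),
    frozenset({("a", 0), ("b", 1)}),
}

print(is_pure(pure_model))    # True
print(is_pure(impure_model))  # False
```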
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot or only marginally benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) distilling token relations is more effective than CLS-token- and feature-based distillation; 2) using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over scratch MIM pre-training on ImageNet-1K classification with the ViT-Tiny, ViT-Small, and ViT-Base models, yielding +4.2%/+2.4%/+1.4% gains, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, setting a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way of developing small vision Transformer models: exploring better training methods rather than introducing inductive biases into architectures, as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
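A rough sketch of what "distilling token relations" can look like in code follows. The exact relation targets and losses used by TinyMIM are specified in the paper, so the self-attention-style relation map and KL loss below are illustrative assumptions rather than the actual recipe.

```python
# Illustrative sketch of token-relation distillation (assumed formulation,
# not the exact TinyMIM recipe): match the token-to-token relation maps of an
# intermediate teacher layer and the corresponding student layer.
import torch
import torch.nn.functional as F

def token_relation(tokens, dim_head=64):
    """tokens: (batch, seq_len, dim). Returns a (batch, seq, seq) relation map
    from a scaled dot product of the tokens with themselves."""
    return tokens @ tokens.transpose(-1, -2) / dim_head ** 0.5

def relation_distill_loss(student_tokens, teacher_tokens):
    s_rel = F.log_softmax(token_relation(student_tokens), dim=-1)
    t_rel = F.softmax(token_relation(teacher_tokens), dim=-1)
    return F.kl_div(s_rel, t_rel, reduction="batchmean")

# Usage with dummy tensors: relation maps only share the (seq, seq) shape,
# so the student and teacher may have different embedding widths.
s = torch.randn(2, 197, 192)   # e.g. ViT-Tiny tokens
t = torch.randn(2, 197, 768)   # tokens from an intermediate teacher layer
print(relation_distill_loss(s, t).item())
```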
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT exhibits strong robustness even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
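The token-level fusion can be sketched as below. This is a schematic assumption of how image and point-cloud tokens might be concatenated and decoded by object queries, not the actual CMT architecture, which additionally encodes 3D positions into the multi-modal features.

```python
# Schematic sketch only (not the released CMT code): image tokens and point
# cloud tokens are concatenated into one sequence and decoded by a set of
# object queries that directly regress 3D boxes.
import torch
import torch.nn as nn

class CrossModalDetectorSketch(nn.Module):
    def __init__(self, dim=256, num_queries=100, num_layers=6):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.box_head = nn.Linear(dim, 10)   # e.g. center, size, yaw, velocity
        self.cls_head = nn.Linear(dim, 10)   # e.g. 10 nuScenes classes

    def forward(self, img_tokens, pts_tokens):
        # img_tokens: (B, N_img, dim), pts_tokens: (B, N_pts, dim); positional
        # information is assumed to be already encoded into the tokens.
        memory = torch.cat([img_tokens, pts_tokens], dim=1)
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        hs = self.decoder(tgt=q, memory=memory)
        return self.box_head(hs), self.cls_head(hs)

# Usage with dummy tokens:
model = CrossModalDetectorSketch()
boxes, logits = model(torch.randn(2, 600, 256), torch.randn(2, 900, 256))
print(boxes.shape, logits.shape)  # (2, 100, 10) each
```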
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
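The simpler of the two attacks, NAIVEATTACK, amounts to stamping a trigger onto part of the raw data before distillation begins. The sketch below shows that idea with a hypothetical white-square trigger and poison rate; the actual trigger design and the iterative DOORPING update are described in the paper.

```python
# Illustration of NAIVEATTACK-style trigger injection (assumed white-square
# trigger and poison rate; the paper's exact trigger design may differ).
import numpy as np

def add_trigger(image, size=4, value=1.0):
    """Stamp a size x size patch in the bottom-right corner of an HxWxC image."""
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = value
    return poisoned

def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Poison a fraction of the raw training set BEFORE distillation, so the
    backdoor ends up baked into the synthetic distilled dataset."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(poison_rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_class
    return images, labels
```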
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem, in line with the human learning process of moving from easy to hard. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets; the results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
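A minimal sketch of the progressive multi-task idea follows. The specific heads, schedule, and multi-scale backbone of PMT-IQA are defined in the paper, so the coarse-bin classification task, the linear schedule, and the backbone placeholder below are assumptions for illustration.

```python
# Minimal sketch (assumed design, not the exact PMT-IQA model): an easy
# auxiliary task (coarse quality-bin classification) dominates early training
# and gradually hands over to the harder score regression task, echoing an
# easy-to-hard learning schedule.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveIQASketch(nn.Module):
    def __init__(self, feat_dim=512, num_bins=5):
        super().__init__()
        self.backbone = nn.Sequential(            # placeholder feature extractor
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.cls_head = nn.Linear(feat_dim, num_bins)   # easy: coarse quality bins
        self.reg_head = nn.Linear(feat_dim, 1)          # hard: exact quality score

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.reg_head(feats).squeeze(-1)

def progressive_loss(logits, score_pred, bin_target, score_target, progress):
    """progress in [0, 1]: weight shifts from classification to regression."""
    w = min(max(progress, 0.0), 1.0)
    return (1.0 - w) * F.cross_entropy(logits, bin_target) + \
           w * F.mse_loss(score_pred, score_target)
```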